Results 1 - 15 of 15
1.
4th International Conference on Innovative Trends in Information Technology, ICITIIT 2023 ; 2023.
Article in English | Scopus | ID: covidwho-2303387

ABSTRACT

In this study, we analyse the impact of the Universal Adversarial Perturbation attack on the Inception-ResNet-v1 model, using a lung CT scan dataset for COVID-19 classification and a retinal OCT scan dataset for Diabetic Macular Edema (DME) classification. The effectiveness of adversarial retraining as a defense mechanism against this attack is examined. The study is organised into three parts: the implementation of the Inception-ResNet-v1 model, the effect of the attack, and adversarial retraining. © 2023 IEEE.
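
As a rough illustration of the adversarial retraining defense examined here, the hedged sketch below fine-tunes a classifier on batches that mix clean images with images perturbed by a fixed universal perturbation. The names `model`, `train_loader`, and `uap` are assumed inputs; this is not the authors' exact procedure.

```python
# Hedged sketch of adversarial retraining: fine-tune a classifier on a mix of
# clean images and images perturbed by a fixed universal perturbation `uap`.
# `model`, `train_loader`, and `uap` are assumed to exist; illustrative only.
import torch
import torch.nn.functional as F

def adversarial_retrain(model, train_loader, uap, epochs=5, lr=1e-4, device="cpu"):
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            # Apply the universal perturbation to half of each batch.
            half = images.size(0) // 2
            perturbed = torch.clamp(images[:half] + uap, 0.0, 1.0)
            batch = torch.cat([perturbed, images[half:]], dim=0)
            loss = F.cross_entropy(model(batch), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```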

2.
Imaging Science Journal ; 2023.
Article in English | Scopus | ID: covidwho-2266261

ABSTRACT

With growing demand for diagnosing definite COVID-19 cases, employing radiological images, i.e., the chest X-ray, is becoming challenging. Deep Convolutional Neural Networks (DCNN) provide effective automated models to detect COVID-19 positive cases. To improve overall accuracy, this paper proposes using a novel Trigonometric Function (TF) instead of the existing gradient descent-based method for training the fully connected layers, yielding a COVID-19 detector with parallel implementation ability. The designed model is then benchmarked on a verified dataset called COVID-Xray-5k. The results are compared against classic DCNN, BWC, and MSAD. The results confirm that the proposed detector presents competitive performance compared to the benchmark detection models. The paper also examines class activation maps to detect the areas probably infected by the COVID-19 virus. As experts confirm, the obtained results correlate with clinical findings. © 2023 The Royal Photographic Society.
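
For readers unfamiliar with class activation maps, the following hedged sketch shows the standard CAM computation for a CNN whose final convolutional features are globally average-pooled into a linear classifier. The arrays `features` and `fc_weight` are assumed inputs; the code illustrates the general idea, not this paper's implementation.

```python
# Hedged sketch of a class activation map (CAM) for one image.
# features: (C, H, W) activations from the last conv layer.
# fc_weight: (num_classes, C) weights of the final linear classifier.
import numpy as np

def class_activation_map(features, fc_weight, class_idx):
    weights = fc_weight[class_idx]                 # (C,) weights for the chosen class
    cam = np.tensordot(weights, features, axes=1)  # (H, W) weighted sum over channels
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()                           # normalize to [0, 1] for overlay
    return cam
```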

3.
Soft comput ; : 1-20, 2021 May 10.
Article in English | MEDLINE | ID: covidwho-2287010

ABSTRACT

The COVID-19 pandemic has significantly affected the life and health of many communities worldwide. Early detection of infected patients is effective in fighting COVID-19. Using radiology (X-ray) images is, perhaps, the fastest way to diagnose patients. Deep Convolutional Neural Networks (CNNs) can therefore be considered applicable tools for diagnosing COVID-19 positive cases. Due to the complicated architecture of a deep CNN, its real-time training and testing become a challenging problem. This paper proposes using an Extreme Learning Machine (ELM) instead of the last fully connected layer to address this deficiency. However, the stochastic tuning of the parameters in the ELM's supervised section makes the final model unreliable. Therefore, to cope with this problem and maintain network reliability, the sine-cosine algorithm was utilized to tune the ELM's parameters. The designed network is then benchmarked on the COVID-Xray-5k dataset, and the results are verified by a comparative study with a canonical deep CNN, ELM optimized by cuckoo search, ELM optimized by a genetic algorithm, and ELM optimized by the whale optimization algorithm. The proposed approach outperforms the comparative benchmarks with a final accuracy of 98.83% on the COVID-Xray-5k dataset, a relative error reduction of 2.33% compared to a canonical deep CNN. Even more critically, the designed network's training time is only 0.9421 ms, and the overall detection test time for 3100 images is 2.721 s.
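
To make the ELM replacement of the last fully connected layer concrete, here is a minimal sketch: random hidden weights with closed-form output weights via the pseudoinverse. In the paper the random parameters are further tuned by the sine-cosine algorithm; that search is omitted here, and all names and sizes are illustrative.

```python
# Hedged sketch of an Extreme Learning Machine (ELM) head on CNN features.
import numpy as np

def train_elm(X, Y, n_hidden=500, seed=0):
    # X: (n_samples, n_features) features; Y: (n_samples, n_classes) one-hot labels.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                  # random biases
    H = np.tanh(X @ W + b)                             # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                       # closed-form output weights
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```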

4.
Diagnostics (Basel) ; 13(1)2023 Jan 03.
Article in English | MEDLINE | ID: covidwho-2199870

ABSTRACT

Chest X-ray radiography (CXR) is among the most frequently used medical imaging modalities. It has preeminent value in the detection of multiple life-threatening diseases. Radiologists can visually inspect CXR images for the presence of diseases. Most thoracic diseases have very similar patterns, which makes diagnosis prone to human error and leads to misdiagnosis. Computer-aided detection (CAD) of lung diseases in CXR images is among the popular topics in medical imaging research. Machine learning (ML) and deep learning (DL) provide techniques to make this task more efficient and faster. Numerous experiments in the diagnosis of various diseases have proved the potential of these techniques. In comparison to previous reviews, our study describes in detail several publicly available CXR datasets for different diseases. It presents an overview of recent deep learning models using CXR images to detect chest diseases, such as VGG, ResNet, DenseNet, Inception, EfficientNet, RetinaNet, and ensemble learning methods that combine multiple models. It summarizes the techniques used for CXR image preprocessing (enhancement, segmentation, bone suppression, and data augmentation) to improve image quality and address data imbalance issues, as well as the use of DL models to speed up the diagnosis process. This review also discusses the challenges present in the published literature and highlights the importance of interpretability and explainability for better understanding the DL models' detections. In addition, it outlines a direction for researchers to help develop more effective models for early and automatic detection of chest diseases.
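
As a small, hedged example of the preprocessing and augmentation techniques surveyed here (enhancement-style jitter plus simple geometric transforms often used against class imbalance), the following torchvision pipeline uses illustrative parameters rather than any setting recommended by the review.

```python
# Hedged sketch of a CXR preprocessing/augmentation pipeline; values are illustrative.
import torchvision.transforms as T

cxr_train_transform = T.Compose([
    T.Grayscale(num_output_channels=3),            # CXRs are single-channel; CNNs expect 3
    T.Resize((224, 224)),
    T.RandomRotation(degrees=10),                  # small rotations augment scarce classes
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.2, contrast=0.2),   # mild enhancement-style jitter
    T.ToTensor(),
])
```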

5.
PeerJ Comput Sci ; 8: e1031, 2022.
Article in English | MEDLINE | ID: covidwho-1979623

ABSTRACT

Deep convolutional neural networks (CNNs) have shown potential for computer-aided diagnosis systems (CADs) by learning features directly from images rather than using traditional feature extraction methods. Nevertheless, due to the limited sample sizes and heterogeneity in tumor presentation in medical images, CNN models trained from scratch suffer from training issues such as overfitting. Alternatively, transfer learning (TL) from CNNs pre-trained on non-medical applications is used to derive tumor knowledge from medical image datasets, alleviating the need for large datasets. This study proposes two ensemble learning techniques: E-CNN (product rule) and E-CNN (majority voting). These techniques are based on adapting pretrained CNN models to classify colon cancer histopathology images into various classes. In these ensembles, the individual members are initially constructed by adapting pretrained DenseNet121, MobileNetV2, InceptionV3, and VGG16 models. The adaptation of these models is based on a block-wise fine-tuning policy, in which a set of dense and dropout layers is appended to these pretrained models to explore the variation in the histology images. Then, the models' decisions are fused via product rule and majority voting aggregation methods. The proposed model was validated against the standard pretrained models and the most recent works on two publicly available benchmark colon histopathological image datasets: Stoean (357 images) and Kather colorectal histology (5,000 images). The resulting accuracies were 97.20% and 91.28%, respectively. The achieved results outperformed the state-of-the-art studies and confirmed that the proposed E-CNNs could be extended for use in various medical image applications.
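
The two decision-fusion rules named above can be sketched directly on per-model class-probability arrays. The hedged sketch below assumes one `(n_samples, n_classes)` probability array per base CNN; the specific base models and preprocessing are not reproduced here.

```python
# Hedged sketch of product-rule and majority-voting fusion of ensemble members.
import numpy as np

def product_rule(probs_list):
    # Multiply the class probabilities of all models, then pick the largest.
    fused = np.prod(np.stack(probs_list), axis=0)              # (n_samples, n_classes)
    return fused.argmax(axis=1)

def majority_voting(probs_list):
    # Each model votes with its argmax class; ties go to the lowest class index.
    votes = np.stack([p.argmax(axis=1) for p in probs_list])   # (n_models, n_samples)
    n_classes = probs_list[0].shape[1]
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
    return counts.argmax(axis=0)
```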

6.
Cognit Comput ; 14(5): 1752-1772, 2022.
Article in English | MEDLINE | ID: covidwho-1943282

ABSTRACT

Novel coronavirus disease (COVID-19) is an extremely contagious and quickly spreading coronavirus infection. Severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS), which broke out in 2002 and 2011, and the current COVID-19 pandemic are all from the same family of coronaviruses. This work aims to classify COVID-19, SARS, and MERS chest X-ray (CXR) images using deep convolutional neural networks (CNNs). To the best of our knowledge, this classification scheme has never been investigated in the literature. A unique database, called QU-COVID-family, was created, consisting of 423 COVID-19, 144 MERS, and 134 SARS CXR images. Furthermore, a robust COVID-19 recognition system was proposed that identifies lung regions using a CNN segmentation model (U-Net) and then classifies the segmented lung images as COVID-19, MERS, or SARS using a pre-trained CNN classifier. The Score-CAM visualization method was utilized to visualize the classification output and understand the reasoning behind the decisions of the deep CNNs. Several deep learning classifiers were trained and tested; four outperforming algorithms were reported: SqueezeNet, ResNet18, InceptionV3, and DenseNet201. Original and preprocessed images were used individually and all together as the input(s) to the networks. Two recognition schemes were considered: plain CXR classification and segmented CXR classification. For plain CXRs, InceptionV3 outperformed the other networks with a 3-channel scheme, achieving sensitivities of 99.5%, 93.1%, and 97% for classifying COVID-19, MERS, and SARS images, respectively. In contrast, for segmented CXRs, InceptionV3 performed best using the original CXR images, achieving sensitivities of 96.94%, 79.68%, and 90.26% for classifying COVID-19, MERS, and SARS images, respectively. Classification performance degrades with segmented CXRs compared to plain CXRs. However, the results are more reliable, as the network learns from the main region of interest and avoids irrelevant non-lung areas (heart, bones, or text), which was confirmed by the Score-CAM visualization. All networks showed high COVID-19 detection sensitivity (> 96%) with the segmented lung images. This indicates that COVID-19 cases have a unique radiographic signature in the eyes of AI, which is often challenging for medical doctors to identify.
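
A hedged sketch of the segment-then-classify pipeline described above: a U-Net produces a lung mask that is applied to the CXR before a pre-trained classifier sees it. The `unet` and `classifier` objects are assumed trained models, and the threshold and channel handling are illustrative choices, not the authors' exact settings.

```python
# Hedged sketch of segmented CXR classification (U-Net mask, then CNN classifier).
import torch

@torch.no_grad()
def segment_then_classify(cxr, unet, classifier, threshold=0.5):
    # cxr: (1, 1, H, W) tensor scaled to [0, 1]
    mask = torch.sigmoid(unet(cxr)) > threshold        # binary lung mask
    lung_only = cxr * mask                              # zero out non-lung regions
    logits = classifier(lung_only.repeat(1, 3, 1, 1))   # 3-channel input for ImageNet CNNs
    return logits.softmax(dim=1)                        # COVID-19 / MERS / SARS probabilities
```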

7.
Multimed Tools Appl ; 81(12): 16411-16439, 2022.
Article in English | MEDLINE | ID: covidwho-1826736

ABSTRACT

In a brief period, the recent coronavirus (COVID-19) has already infected large populations worldwide. Diagnosing an infected individual requires a Real-Time Polymerase Chain Reaction (RT-PCR) test, which can be expensive and limited in most developing countries, making them rely on alternatives like Chest X-Rays (CXR) or Computerized Tomography (CT) scans. However, results from these imaging approaches can confuse medical experts due to their similarities with other diseases like pneumonia. Other solutions based on Deep Convolutional Neural Networks (DCNN) have recently improved and automated the diagnosis of COVID-19 from CXRs and CT scans. However, upon examination, most proposed studies focused primarily on accuracy rather than deployment and reproduction, which may make them difficult to reproduce and implement in locations with inadequate computing resources. Therefore, instead of focusing only on accuracy, this work investigated the effects of parameter reduction through a proposed truncation method. Various DCNNs had their architectures truncated, retaining only their initial core block and reducing their parameter counts to under 1 M. Once trained and validated, the findings showed that a DCNN with robust layer aggregations, like InceptionResNetV2, was less vulnerable to the adverse effects of the proposed truncation. The results also showed that from its full-length size of 55 M parameters with 98.67% accuracy, the proposed truncation reduced its parameters to only 441 K while still attaining an accuracy of 97.41%, outperforming other studies in terms of its size-to-performance ratio.
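
The truncation idea can be sketched generically: keep only the earliest blocks of a pre-trained backbone and attach global pooling plus a small head. The backbone (ResNet-18 here), cut point, and head size are illustrative assumptions, not the paper's exact truncation of InceptionResNetV2.

```python
# Hedged sketch of architecture truncation for parameter reduction.
import torch
import torch.nn as nn
from torchvision import models

def truncate_backbone(n_blocks=3, n_classes=3):
    backbone = models.resnet18(weights=None)
    # Keep the stem (conv/bn/relu/pool) plus the first n_blocks residual stages.
    stem = nn.Sequential(*list(backbone.children())[:n_blocks + 4])
    with torch.no_grad():
        channels = stem(torch.zeros(1, 3, 224, 224)).shape[1]   # infer feature width
    return nn.Sequential(stem, nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(channels, n_classes))

model = truncate_backbone()
print(sum(p.numel() for p in model.parameters()))   # parameter count after truncation
```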

8.
15th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC) / 9th International Conference on Bioimaging (BIOIMAGING) ; : 160-167, 2022.
Article in English | Web of Science | ID: covidwho-1798805

ABSTRACT

COVID-19 is a recently emerged pneumonia disease with threatening complications that can be avoided by early diagnosis. Deep learning (DL) multimodality fusion is rapidly becoming state of the art, leading to enhanced performance in various medical applications such as cognitive impairment diseases and lung cancer. In this paper, for COVID-19 detection, seven deep learning models (VGG19, DenseNet121, InceptionV3, InceptionResNetV2, Xception, ResNet50V2, and MobileNetV2) using single-modality and joint fusion were empirically examined and contrasted in terms of accuracy, area under the curve, sensitivity, specificity, precision, and F1-score, with the Scott-Knott Effect Size Difference statistical test and the Borda Count voting method. The empirical evaluations were conducted over two datasets, the COVID-19 Radiography Database and COVID-CT, using 5-fold cross validation. Results showed that MobileNetV2 was the best-performing and least sensitive technique on the two datasets using mono-modality, with accuracy values of 78% for the Computed Tomography (CT) modality and 92% for the Chest X-Ray (CXR) modality. Joint fusion outperformed mono-modality DL techniques, with the joint fusion of MobileNetV2, ResNet50V2, and InceptionResNetV2 performing best for COVID-19 diagnosis with an accuracy of 99%. Therefore, we recommend the use of the joint-fusion DL models MobileNetV2, ResNet50V2, and InceptionResNetV2 for the detection of COVID-19. As for mono-modality, MobileNetV2 was the best-performing and least sensitive model across the two imaging modalities.
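
One plausible reading of joint fusion, sketched below under stated assumptions, is feature-level fusion of the two imaging modalities: separate encoders for CT and CXR whose pooled features are concatenated into a shared classification head. The encoder choice and feature sizes are illustrative, not the paper's configuration.

```python
# Hedged sketch of joint (feature-level) fusion of CT and CXR inputs.
import torch
import torch.nn as nn
from torchvision import models

class JointFusion(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.ct_encoder = models.mobilenet_v2(weights=None).features
        self.cxr_encoder = models.mobilenet_v2(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(1280 * 2, n_classes)   # 1280 features per MobileNetV2 branch

    def forward(self, ct, cxr):
        f_ct = self.pool(self.ct_encoder(ct)).flatten(1)
        f_cxr = self.pool(self.cxr_encoder(cxr)).flatten(1)
        return self.head(torch.cat([f_ct, f_cxr], dim=1))   # fused prediction
```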

9.
Biomed Signal Process Control ; 73: 103441, 2022 Mar.
Article in English | MEDLINE | ID: covidwho-1549668

ABSTRACT

Today, the world continues to suffer from the active COVID-19 pandemic, which motivates scientists and researchers to detect and diagnose infected people. The chest X-ray (CXR) image is a common tool for detection. Although CXRs offer low informative detail about COVID-19 patches, computer vision helps to overcome this through grayscale spatial exploitation analysis. In turn, it is highly recommended to acquire more CXR images to increase the capacity and ability to learn from mining the grayscale spatial information. In this paper, an efficient Gray-scale Spatial Exploitation Net (GSEN) is designed by employing web-page crawling across cloud computing environments. The contributions of this work are: i) a framework methodology for constructing a consistent dataset by web crawling, updating the dataset continuously per crawling iteration; ii) a lightweight, fast-learning grayscale spatial exploitation deep neural net with comparable accuracy and fine-tuned parameters; iii) a comprehensive evaluation of the designed net on the different collected dataset(s) from COVID-19 web crawling versus transfer learning with pre-trained nets. Different experiments were performed to benchmark both the proposed web crawling framework methodology and the designed gray-scale spatial exploitation net. In terms of accuracy, the proposed net achieves 95.60% for two-class labels and 92.67% for three-class labels, compared with the most recent transfer learning approaches based on GoogLeNet, VGG-19, ResNet-50, and AlexNet. Furthermore, the accuracy improvement from web crawling has a positive relationship with the cardinality of the crawled CXR dataset.
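
A hedged sketch of the dataset-construction step: crawl a seed page for image links and download them into a growing CXR folder. The URL handling, file filter, and storage layout are illustrative assumptions; real crawling must respect source sites' terms and verify and de-duplicate images before training.

```python
# Hedged sketch of image crawling for dataset construction; all parameters illustrative.
import os
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def crawl_images(seed_url, out_dir, limit=50):
    os.makedirs(out_dir, exist_ok=True)
    html = requests.get(seed_url, timeout=30).text
    links = [urljoin(seed_url, img.get("src", ""))
             for img in BeautifulSoup(html, "html.parser").find_all("img")]
    saved = 0
    for url in links:
        if saved >= limit or not url.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        data = requests.get(url, timeout=30).content
        path = os.path.join(out_dir, f"img_{saved:04d}{os.path.splitext(url)[1]}")
        with open(path, "wb") as f:
            f.write(data)                      # store for later manual verification
        saved += 1
    return saved
```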

10.
Wirel Pers Commun ; 124(2): 1355-1374, 2022.
Article in English | MEDLINE | ID: covidwho-1549505

ABSTRACT

The early diagnosis and accurate separation of COVID-19 from non-COVID-19 cases based on pulmonary diffuse airspace opacities is one of the challenges facing researchers. Recently, researchers have tried to exploit the capability of Deep Learning (DL) methods to assist clinicians and radiologists in diagnosing positive COVID-19 cases from chest X-ray images. In this approach, DL models, especially Deep Convolutional Neural Networks (DCNN), provide real-time, automated, effective models to detect COVID-19 cases. However, conventional DCNNs usually use Gradient Descent-based approaches for training the fully connected layers. Although GD-based Training (GBT) methods are easy to implement and fast in processing, they demand extensive manual parameter tuning to become optimal. Besides, the GBT procedure is inherently sequential, so parallelizing it with Graphics Processing Units is very difficult. Therefore, for the sake of having a real-time COVID-19 detector with parallel implementation capability, this paper proposes the use of the Whale Optimization Algorithm for training the fully connected layers. The designed detector is then benchmarked on a verified dataset called COVID-Xray-5k, and the results are verified by a comparative study with classic DCNN, DUICM, and a Matched Subspace classifier with Adaptive Dictionaries. The results show that the proposed model, with an average accuracy of 99.06%, provides 1.87% better performance than the best comparison model. The paper also considers the concept of the Class Activation Map to detect the regions potentially infected by the virus, which was found to correlate with clinical results, as confirmed by experts. Although the results are auspicious, further investigation is needed on a larger dataset of COVID-19 images to obtain a more comprehensive evaluation of accuracy rates.
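
To illustrate metaheuristic training of a fully connected layer, the hedged sketch below scores a population of candidate weight matrices by classification accuracy on CNN features and keeps the best one. The update rule is a simple random perturbation around the current best, standing in for the whale optimization update; it is illustrative only, not the authors' algorithm.

```python
# Hedged sketch of population-based (metaheuristic-style) training of an FC layer.
import numpy as np

def metaheuristic_fc(features, labels, n_classes, pop=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    dim = features.shape[1] * n_classes
    population = rng.standard_normal((pop, dim)) * 0.01

    def fitness(w):
        logits = features @ w.reshape(features.shape[1], n_classes)
        return (logits.argmax(axis=1) == labels).mean()   # accuracy as fitness

    best = max(population, key=fitness)
    for _ in range(iters):
        population = best + rng.standard_normal((pop, dim)) * 0.01   # explore around best
        candidate = max(population, key=fitness)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best.reshape(features.shape[1], n_classes)
```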

11.
Comput Biol Med ; 138: 104930, 2021 11.
Article in English | MEDLINE | ID: covidwho-1458652

ABSTRACT

Respiratory illness is a primary cause of mortality and impairment in the life span of an individual in the current COVID-19 pandemic scenario. The inability to inhale and exhale is one of the most difficult conditions for a person suffering from respiratory disorders. Unfortunately, the diagnosis of respiratory disorders with the presently available imaging and auditory screening modalities is sub-optimal, and the accuracy of diagnosis varies across medical experts. At present, deep neural nets demand a massive amount of data to build precise models. In reality, the respiratory data set is quite limited, and therefore, data augmentation (DA) is employed to enlarge the data set. In this study, conditional generative adversarial network (cGAN) based DA is utilized for the synthetic generation of signals. Publicly available repositories, such as the ICBHI 2017 challenge, RALE, and the Think Labs Lung Sounds Library, are considered for classifying the respiratory signals. To assess the efficacy of the artificially created signals from the DA approach, similarity measures are calculated between original and augmented signals. After that, to quantify the performance of augmentation in classification, scalogram representations of the generated signals are fed as input to different pre-trained deep learning architectures, viz. AlexNet, GoogLeNet, and ResNet-50. The experimental results are computed and compared with existing classical augmentation approaches. The research findings conclude that the proposed cGAN method of augmentation provides better accuracies of 92.50% and 92.68% on the two data sets using the ResNet-50 model.
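
The scalogram representation fed to the pre-trained CNNs can be sketched with a continuous wavelet transform, for example using PyWavelets as below. The wavelet choice, scales, and normalization are illustrative assumptions rather than the study's settings.

```python
# Hedged sketch of a scalogram (time-scale energy map) of a respiratory signal.
import numpy as np
import pywt

def scalogram(signal, fs, n_scales=64, wavelet="morl"):
    scales = np.arange(1, n_scales + 1)
    coeffs, _ = pywt.cwt(signal, scales, wavelet, sampling_period=1.0 / fs)
    power = np.abs(coeffs) ** 2                    # (n_scales, n_samples) energy
    power = np.log1p(power)                        # compress dynamic range for imaging
    return (power - power.min()) / (power.max() - power.min() + 1e-12)
```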


Subject(s)
COVID-19 , Pandemics , Humans , Lung , Neural Networks, Computer , SARS-CoV-2
12.
MethodsX ; 8: 101408, 2021.
Article in English | MEDLINE | ID: covidwho-1253393

ABSTRACT

Deep learning and computer vision have enabled a new way to automate medical image diagnosis. However, to achieve reliable and state-of-the-art performance, vision-based models require high computing costs and robust datasets. Moreover, even with conventional training methods, large vision-based models still involve lengthy training epochs and costly disk consumption, which can make deployment difficult in the absence of high-end infrastructure. Therefore, this method modified the training approach of a vision-based model through layer truncation, partial layer freezing, and feature fusion. The proposed method was employed on a Densely Connected Convolutional Neural Network (CNN), the DenseNet model, to diagnose whether a Chest X-Ray (CXR) is normal, shows Pneumonia, or shows COVID-19. From the results, the performance-to-parameter-size ratio highlighted this method's effectiveness in training a DenseNet model with fewer parameters than traditionally trained state-of-the-art Deep CNN (DCNN) models, while still yielding promising results.
• This novel method significantly reduced the model's parameter size without sacrificing much of its classification performance.
• The proposed method performed better than some state-of-the-art Deep Convolutional Neural Network (DCNN) models that diagnosed samples of CXRs with COVID-19.
• The proposed method delivered a conveniently scalable, reproducible, and deployable DCNN model for most low-end devices.
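
A hedged sketch of layer truncation combined with partial layer freezing on a DenseNet backbone: keep the early blocks, freeze the earliest of them, and leave the last retained blocks plus a small head trainable. The cut point, frozen fraction, and head are illustrative assumptions, not this method's exact configuration.

```python
# Hedged sketch of truncation + partial layer freezing on DenseNet-121 features.
import torch.nn as nn
from torchvision import models

def truncated_frozen_densenet(n_classes=3, keep_children=6, train_last=2):
    features = models.densenet121(weights="IMAGENET1K_V1").features
    kept = nn.Sequential(*list(features.children())[:keep_children])   # truncate
    for child in list(kept.children())[:-train_last]:
        for p in child.parameters():
            p.requires_grad = False            # freeze the earliest retained layers
    head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.LazyLinear(n_classes))   # in_features inferred at first forward
    return nn.Sequential(kept, nn.ReLU(inplace=True), head)
```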

13.
Comput Struct Biotechnol J ; 19: 2833-2850, 2021.
Article in English | MEDLINE | ID: covidwho-1240272

ABSTRACT

The worldwide health crisis caused by the SARS-CoV-2 virus has resulted in more than 3 million deaths so far. Improving early screening, diagnosis, and prognosis of the disease are critical steps in assisting healthcare professionals to save lives during this pandemic. Since the WHO declared the COVID-19 outbreak a pandemic, several studies have been conducted using Artificial Intelligence techniques to optimize these steps in clinical settings in terms of quality, accuracy, and, most importantly, time. The objective of this study is to conduct a systematic literature review on published and preprint reports of Artificial Intelligence models developed and validated for screening, diagnosis, and prognosis of the coronavirus disease 2019. We included 101 studies, published from January 1st, 2020 to December 30th, 2020, that developed AI prediction models which can be applied in the clinical setting. We identified in total 14 models for screening, 38 diagnostic models for detecting COVID-19, and 50 prognostic models for predicting ICU need, ventilator need, mortality risk, severity assessment, or hospital length of stay. Moreover, 43 studies were based on medical imaging and 58 studies on the use of clinical parameters, laboratory results, or demographic features. Several heterogeneous predictors derived from multimodal data were identified. An analysis of these multimodal data, captured from various sources, was performed in terms of prominence for each category of the included studies. Finally, a Risk of Bias (RoB) analysis was also conducted to examine the applicability of the included studies in the clinical setting and to assist healthcare providers, guideline developers, and policymakers.

14.
Biomed Signal Process Control ; 68: 102764, 2021 Jul.
Article in English | MEDLINE | ID: covidwho-1230385

ABSTRACT

Real-time detection of COVID-19 using radiological images has gained priority due to the increasing demand for fast diagnosis of COVID-19 cases. This paper introduces a novel two-phase approach for classifying chest X-ray images. Conventional Deep Learning (DL) methods fail to meet this demand since training and fine-tuning the model's parameters consume much time. In this approach, the first phase trains a deep CNN to work as a feature extractor, and the second phase uses Extreme Learning Machines (ELMs) for real-time detection. The main drawback of ELMs is the need for a large number of hidden-layer nodes to obtain a reliable and accurate detector for image processing, since the detection performance depends remarkably on the setting of initial weights and biases. Therefore, this paper uses the Chimp Optimization Algorithm (ChOA) to improve results and increase the reliability of the network while maintaining real-time capability. The designed detector is benchmarked on the COVID-Xray-5k and COVIDetectioNet datasets, and the results are verified by comparing it with the classic DCNN, Genetic Algorithm optimized ELM (GA-ELM), Cuckoo Search optimized ELM (CS-ELM), and Whale Optimization Algorithm optimized ELM (WOA-ELM). The proposed approach outperforms the other comparative benchmarks with final accuracies of 98.25% and 99.11% on the COVID-Xray-5k and COVIDetectioNet datasets, respectively, leading to relative error reductions of 1.75% and 1.01% compared to a classic DCNN. More importantly, the time needed for training the deep ChOA-ELM is only 0.9474 milliseconds, and the overall testing time for 3100 images is 2.937 s.
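
The two-phase pipeline can be sketched as a frozen CNN feature extractor followed by a fast ELM-style head; in this hedged sketch the output weights are solved with a plain pseudoinverse, whereas the paper tunes the ELM with ChOA. The backbone, layer choices, and names are assumptions.

```python
# Hedged sketch of the two-phase detector: CNN features, then an ELM-style head.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

@torch.no_grad()
def extract_features(images):
    # Phase 1: frozen CNN feature extractor (penultimate ResNet-18 features).
    net = models.resnet18(weights="IMAGENET1K_V1")
    net.fc = nn.Identity()                 # drop the classification layer
    net.eval()
    return net(images).numpy()             # (n_samples, 512)

def fit_elm_head(feats, labels, n_classes, n_hidden=300, seed=0):
    # Phase 2: random projection + closed-form output weights (ChOA search omitted).
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((feats.shape[1], n_hidden))
    H = np.tanh(feats @ W)
    Y = np.eye(n_classes)[labels]          # one-hot targets
    beta = np.linalg.pinv(H) @ Y
    return W, beta
```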

15.
Biomed Signal Process Control ; 68: 102583, 2021 Jul.
Article in English | MEDLINE | ID: covidwho-1163451

ABSTRACT

Due to an unforeseen turn of events, our world has undergone another global pandemic from a highly contagious novel coronavirus named COVID-19. The novel virus inflames the lungs similarly to Pneumonia, making it challenging to diagnose. Currently, the common standard for diagnosing the virus's presence in an individual is a molecular real-time Reverse-Transcription Polymerase Chain Reaction (rRT-PCR) test of fluids acquired through nasal swabs. Such a test is difficult to acquire in most underdeveloped countries, which have few experts who can perform it. As a substitute, the widely available Chest X-Ray (CXR) became an alternative to rule out the virus. However, such a method does not come easy, as the virus still possesses unknown characteristics that even experienced radiologists and other medical experts find difficult to diagnose through CXRs. Several studies have recently used computer-aided methods to automate and improve the diagnosis of CXRs through Artificial Intelligence (AI) based on computer vision and Deep Convolutional Neural Networks (DCNN), some of which require heavy processing costs and other tedious methods to produce. Therefore, this work proposed Fused-DenseNet-Tiny, a lightweight DCNN model based on a truncated and concatenated densely connected neural network (DenseNet). The model was trained to learn CXR features based on transfer learning, partial layer freezing, and feature fusion. Upon evaluation, the proposed model achieved a remarkable 97.99% accuracy, with only 1.2 million parameters and a shorter end-to-end structure. It also showed better performance than some existing studies and other massive state-of-the-art models that diagnosed COVID-19 from CXRs.
